
    Fusing fine-tuned deep features for skin lesion classification

    Malignant melanoma is one of the most aggressive forms of skin cancer. Early detection is important as it significantly improves survival rates. Consequently, accurate discrimination of malignant skin lesions from benign lesions such as seborrheic keratoses or benign nevi is crucial, and accurate computerised classification of skin lesion images is of great interest to support diagnosis. In this paper, we propose a fully automatic computerised method to classify skin lesions from dermoscopic images. Our approach is based on a novel ensemble scheme for convolutional neural networks (CNNs) that combines intra-architecture and inter-architecture network fusion. The proposed method consists of multiple sets of CNNs of different architecture that represent different feature abstraction levels. Each set of CNNs consists of a number of pre-trained networks that have identical architecture but are fine-tuned on dermoscopic skin lesion images with different settings. The deep features of each network are used to train different support vector machine classifiers. Finally, the average prediction probability classification vectors from the different sets are fused to provide the final prediction. Evaluated on the 600 test images of the ISIC 2017 skin lesion classification challenge, the proposed algorithm yields an area under the receiver operating characteristic curve of 87.3% for melanoma classification and 95.5% for seborrheic keratosis classification, outperforming the top-ranked methods of the challenge while being simpler than them. The obtained results convincingly demonstrate that our proposed approach is a reliable and robust method for feature extraction, model fusion and classification of dermoscopic skin lesion images.
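    The fusion step described above averages per-class probability vectors coming from several SVM classifiers, each trained on the deep features of a differently fine-tuned CNN. The sketch below illustrates this averaging in Python; the function and variable names are illustrative, and the SVMs are assumed to be scikit-learn classifiers with probability outputs enabled, not the authors' actual code.

```python
# Minimal sketch of probability-level fusion (illustrative, not the authors' code).
# svm_models[i] is assumed to be an sklearn SVC(probability=True) trained on the
# deep features of CNN i; feature_sets[i] holds that CNN's features for the test set.
import numpy as np

def fuse_predictions(svm_models, feature_sets):
    probs = [clf.predict_proba(feats) for clf, feats in zip(svm_models, feature_sets)]
    fused = np.mean(probs, axis=0)           # average the class-probability vectors
    return fused, fused.argmax(axis=1)       # fused probabilities and predicted labels
```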

    Transfer learning using a multi-scale and multi-network ensemble for skin lesion classification

    Background and objective: Skin cancer is among the most common cancer types in the white population, and consequently computer-aided methods for skin lesion classification based on dermoscopic images are of great interest. A promising approach uses transfer learning to adapt pre-trained convolutional neural networks (CNNs) for skin lesion diagnosis. Since pre-training commonly occurs with natural images of a fixed image resolution, and these training images are usually significantly smaller than dermoscopic images, downsampling or cropping of skin lesion images is required. This, however, may result in a loss of useful medical information, while the ideal resizing or cropping factor of dermoscopic images for the fine-tuning process remains unknown. Methods: We investigate the effect of image size on skin lesion classification based on pre-trained CNNs and transfer learning. Dermoscopic images from the International Skin Imaging Collaboration (ISIC) skin lesion classification challenge datasets are either resized to or cropped at six different sizes ranging from 224 × 224 to 450 × 450. The resulting classification performance of three well-established CNNs, namely EfficientNetB0, EfficientNetB1 and SeReNeXt-50, is explored. We also propose and evaluate a multi-scale multi-CNN (MSM-CNN) fusion approach based on a three-level ensemble strategy that utilises the three network architectures trained on cropped dermoscopic images of various scales. Results: Our results show that image cropping is a better strategy than image resizing, delivering superior classification performance at all explored image scales. Moreover, fusing the results of all three fine-tuned networks using cropped images at all six scales in the proposed MSM-CNN approach boosts the classification performance compared to a single network or a single image scale. On the ISIC 2018 skin lesion classification challenge test set, our MSM-CNN algorithm yields a balanced multi-class accuracy of 86.2%, making it the second-ranked algorithm on the live leaderboard at the time of writing. Conclusions: We confirm that image size has an effect on skin lesion classification performance when employing transfer learning of CNNs. We also show that image cropping results in better performance than image resizing. Finally, a straightforward ensembling approach that fuses the results from images cropped at six scales and three fine-tuned CNNs is shown to lead to the best classification performance.
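    As a rough illustration of the multi-scale cropping idea, the sketch below centre-crops a dermoscopic image at several sizes and averages the class probabilities produced by several fine-tuned networks over all crops. Only the smallest and largest crop sizes (224 and 450) come from the abstract; the intermediate sizes, function names and model interface are assumptions, not the released MSM-CNN implementation.

```python
# Hypothetical multi-scale, multi-network averaging (not the released MSM-CNN code).
import numpy as np
from PIL import Image

CROP_SIZES = [224, 260, 300, 340, 400, 450]  # 224 and 450 from the abstract; the rest are placeholders

def centre_crop(img: Image.Image, size: int) -> Image.Image:
    # Assumes the image is at least `size` pixels in both dimensions.
    w, h = img.size
    left, top = (w - size) // 2, (h - size) // 2
    return img.crop((left, top, left + size, top + size))

def msm_predict(img, models, preprocess):
    """models: callables mapping a preprocessed crop to a class-probability vector."""
    probs = [m(preprocess(centre_crop(img, s))) for s in CROP_SIZES for m in models]
    return np.mean(probs, axis=0)
```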

    Investigating the impact of the bit depth of fluorescence-stained images on the performance of deep learning-based nuclei instance segmentation

    Nuclei instance segmentation can be considered a key step in the computer-mediated analysis of histological fluorescence-stained (FS) images. Many computer-assisted approaches have been proposed for this task, and among them, supervised deep learning (DL) methods deliver the best performance. An important criterion that can affect the DL-based nuclei instance segmentation performance on FS images is the utilised image bit depth, but to our knowledge, no study has been conducted so far to investigate this impact. In this work, we release a fully annotated FS histological image dataset of nuclei at different image magnifications and from five different mouse organs. Moreover, using different pre-processing techniques and one of the state-of-the-art DL-based methods, we investigate the impact of image bit depth (i.e., eight bits vs. sixteen bits) on the nuclei instance segmentation performance. The results obtained on our dataset and on another publicly available dataset show very competitive nuclei instance segmentation performance for the models trained with 8-bit and 16-bit images. This suggests that processing 8-bit images is sufficient for nuclei instance segmentation of FS images in most cases. The dataset, including the raw image patches as well as the corresponding segmentation masks, is publicly available in the published GitHub repository.
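    A natural pre-processing step implied by the comparison above is reducing 16-bit fluorescence images to 8 bit before training. The sketch below shows one simple way to do this with per-image min-max scaling; the scaling choice is an assumption and not necessarily the setting used in the study.

```python
# Illustrative 16-bit to 8-bit conversion via per-image min-max scaling
# (assumed approach, not necessarily the paper's exact pre-processing).
import numpy as np

def to_8bit(img16: np.ndarray) -> np.ndarray:
    img = img16.astype(np.float32)
    lo, hi = float(img.min()), float(img.max())
    if hi > lo:
        img = (img - lo) / (hi - lo)   # scale intensities to [0, 1]
    else:
        img = np.zeros_like(img)       # constant image: map to zeros
    return np.round(img * 255).astype(np.uint8)
```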

    A dual decoder U-Net-based model for nuclei instance segmentation in hematoxylin and eosin-stained histological images

    Even in the era of precision medicine, with various molecular tests based on omics technologies available to improve the diagnosis process, microscopic analysis of images derived from stained tissue sections remains crucial for diagnostic and treatment decisions. Among other cellular features, both nuclei number and shape provide essential diagnostic information. With the advent of digital pathology and emerging computerized methods to analyze the digitized images, nuclei detection, instance segmentation and classification can be performed automatically. These computerized methods support human experts and allow for faster and more objective image analysis. While methods ranging from conventional image processing techniques to machine learning-based algorithms have been proposed, supervised convolutional neural network (CNN)-based techniques have delivered the best results. In this paper, we propose a dual-decoder U-Net-based model to perform nuclei instance segmentation in hematoxylin and eosin (H&E)-stained histological images. While the encoder path of the model performs standard feature extraction, the two decoder heads predict the foreground and distance maps of all nuclei. The outputs of the two decoder branches are then merged through a watershed algorithm, followed by post-processing refinements to generate the final instance segmentation results. Moreover, to additionally perform nuclei classification, we develop an independent U-Net-based model to classify the nuclei predicted by the dual-decoder model. When applied to three publicly available datasets, our method achieves excellent segmentation performance, leading to average panoptic quality values of 50.8%, 51.3%, and 62.1% for the CryoNuSeg, NuInsSeg, and MoNuSAC datasets, respectively. Moreover, our model is the top-ranked method on the MoNuSAC post-challenge leaderboard.
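    The merging of the two decoder outputs can be pictured as a marker-controlled watershed: markers are derived from the predicted distance map and flooding is restricted to the predicted foreground. The sketch below uses scikit-image to illustrate this step; the threshold values are placeholders and the paper's exact post-processing refinements are not reproduced.

```python
# Illustrative marker-controlled watershed merging of a foreground map and a distance map.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def merge_decoder_outputs(foreground_prob, distance_map, fg_thr=0.5, marker_thr=0.5):
    foreground = foreground_prob > fg_thr                  # binary nuclei foreground
    markers, _ = ndi.label(distance_map > marker_thr)      # one marker per nucleus centre
    # Flood from the markers over the inverted distance map, restricted to the foreground.
    instances = watershed(-distance_map, markers, mask=foreground)
    return instances                                       # labelled nuclei instances
```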

    CryoNuSeg: a dataset for nuclei instance segmentation of cryosectioned H&E-stained histological images

    Nuclei instance segmentation plays an important role in the analysis of hematoxylin and eosin (H&E)-stained images. While supervised deep learning (DL)-based approaches represent the state-of-the-art in automatic nuclei instance segmentation, annotated datasets are required to train these models. There are two main types of tissue processing protocols, resulting in formalin-fixed paraffin-embedded (FFPE) samples and frozen tissue samples (FS), respectively. Although FFPE-derived H&E-stained tissue sections are the most widely used samples, H&E staining of frozen sections derived from FS samples is a relevant method in intra-operative surgical sessions as it can be performed more rapidly. Due to differences in the preparation of these two types of samples, the appearance of the derived images, and in particular of the nuclei, may differ in the acquired whole slide images. Analysis of FS-derived H&E-stained images can be more challenging, as rapid preparation, staining, and scanning of FS sections may lead to deterioration in image quality. In this paper, we introduce CryoNuSeg, the first fully annotated FS-derived, cryosectioned and H&E-stained nuclei instance segmentation dataset. The dataset contains images from 10 human organs that were not exploited in other publicly available datasets, and is provided with three manual mark-ups to allow measuring intra-observer and inter-observer variability. Moreover, we investigate the effects of the tissue fixation/embedding protocol (i.e., FS or FFPE) on the automatic nuclei instance segmentation performance and provide a baseline segmentation benchmark for the dataset that can be used in future research. A step-by-step guide to generate the dataset, as well as the full dataset and other detailed information, are made available to fellow researchers at https://github.com/masih4/CryoNuSeg.
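    One simple way to use the three manual mark-ups for quantifying intra-observer and inter-observer variability is to compare pairs of annotations with an overlap score. The sketch below computes a pixel-level Dice coefficient between two binary mark-ups; the metric choice is an assumption, and the paper's benchmark may rely on instance-level measures instead.

```python
# Illustrative pixel-level Dice score between two binary annotation masks (assumed metric).
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total
```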

    TissueQuest-based characterization of infected macrophages.

    Macrophages were cultured in 96-well plates in the presence of L. major parasites (parasite/macrophage ratio 5:1). (A) The infected macrophages were determined automatically by the TissueQuest software. The circle diagrams depict the distribution of infected (black) and non-infected (gray) macrophages. The infection rate is indicated within the circle diagram. A representative experiment visualizing the infection rate after measuring 310 or 11237 macrophages is shown. (B) The box plots depict the TissueQuest-based quantification of the average number of intracellular parasites per macrophage after analysis of 237 or 8639 macrophages. The gray boxes indicate the range between the first and third quartile. The horizontal lines indicate the median. The whiskers visualize the spread of the data. The red bracket highlights the additional information that was generated after measuring 8639 infected macrophages compared to analyzing 237 infected macrophages (***p<0.0001). The data shown in (A) and (B) are representative of three experiments. (C) Macrophages were analyzed regarding their number of intracellular parasites, and the number of macrophages (y-axis) harboring the indicated number of parasites (x-axis) is shown. Data are presented as box plots. The circle diagram highlights the number of macrophages harboring 1–6 (light gray), 7–12 (dark gray) and 12–20 (black) parasites. Data are representative of three experiments. (D) Validation of the TissueQuest-based quantification of parasites by real-time PCR was performed. Macrophages were infected with different numbers of parasites per host cell (1:1 and 5:1) and, 96 hours post infection, the average number of parasites per macrophage (right y-axis) and the number of L. major parasites per β-actin (left y-axis) were determined (*p<0.01, ***p<0.0001; n = 3; mean +/- SD).
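    The two read-outs shown in panels (A) and (B), the infection rate and the average number of intracellular parasites per macrophage, can be derived from a per-cell list of parasite counts. The sketch below illustrates this calculation in Python; it is a generic re-computation from exported counts, not TissueQuest functionality.

```python
# Illustrative summary of per-cell parasite counts (not TissueQuest code).
import numpy as np

def infection_summary(parasites_per_cell):
    counts = np.asarray(parasites_per_cell)
    infected = counts > 0
    infection_rate = infected.mean() * 100                               # % of infected macrophages (panel A)
    mean_parasites = counts[infected].mean() if infected.any() else 0.0  # parasites per infected cell (panel B)
    return infection_rate, mean_parasites
```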

    Nuclei detection and ring mask calculation by TissueQuest.

    Single-channel greyscale pictures were processed by the TissueQuest software. (A) The nuclei detection was performed according to the following parameters (nuclei size: 10, remove small objects: 1, remove weakly stained objects: 1, automated background: no, automated threshold: 5, virtual channel: no, post-processing order: remove, merge; remove labels smaller than: 30 μm, larger than: 100 μm, weaker than: 137, stronger than: do not use; use merging rules: no). The detected nuclei are automatically surrounded by a red line. (B) The ring masks were created according to the following parameters (interior radius: -0.31 μm, exterior radius: 12.74 μm, use identification cell mask: no, use nucleus mask: no, background threshold: 5; see also S1 Fig, http://www.plosone.org/article/info:doi/10.1371/journal.pone.0139866#pone.0139866.s001). The white inset highlights one infected macrophage. The white arrows depict some of the parasites within the ring mask. (C) The area within the ring mask is highlighted in yellow and represents the region of interest in which the screening for parasites can be performed automatically. Representative pictures out of three experiments are shown. Bar = 10 μm.
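    TissueQuest itself is proprietary, but the nuclei-detection plus ring-mask workflow described above can be approximated with open-source tools. The sketch below is a rough scikit-image analogue, assuming a single-channel DAPI image: nuclei are thresholded and the ring mask is obtained by dilating them outward; the pixel radius and size filter are placeholders standing in for the μm parameters given in the caption.

```python
# Rough open-source analogue of the ring-mask step (illustrative, not TissueQuest).
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects, binary_dilation, disk

def ring_masks(dapi: np.ndarray, exterior_radius_px: int = 25, min_nucleus_px: int = 30):
    nuclei = dapi > threshold_otsu(dapi)                         # detect nuclei
    nuclei = remove_small_objects(nuclei, min_size=min_nucleus_px)
    dilated = binary_dilation(nuclei, disk(exterior_radius_px))  # expand around each nucleus
    ring = np.logical_and(dilated, ~nuclei)                      # cytoplasmic region of interest
    labels, _ = ndi.label(ring)                                  # label individual ring regions
    return nuclei, ring, labels
```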

    Detection of intracellular parasites within the ring mask.

    Macrophages were cultured in 96-well plates in the presence (plus L. major, right row) or absence (w/o L. major, left row) of L. major parasites. After 72 h, DAPI staining was performed to simultaneously visualize the nuclei of the mammalian cells and the DNA-rich areas of the parasites. Single-channel greyscale pictures were processed by the TissueQuest software. The ring masks were created as described in Fig 1 (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0139866#pone.0139866.g001) and in Materials and Methods. (A) The ring mask (highlighted in blue) visualizes the cytoplasmic area of a representative macrophage that was cultured in the absence of parasites. (B) One representative macrophage infected with L. major parasites is shown, including the ring mask, which is highlighted in blue. Visually determined DAPI-positive signals are marked with yellow arrows. For the detection of parasites within the ring masks, a virtual channel of the parasite-associated fluorochrome (in our case DAPI) has to be created (see S2C Fig, http://www.plosone.org/article/info:doi/10.1371/journal.pone.0139866#pone.0139866.s002). The following instrument settings were used: use ring mask: yes, interior radius: -0.31 μm, exterior radius: 12.74 μm, use identification mask: no, use nucleus mask: no, background threshold: 5. (C) One algorithm developed for the detection of weak signals recognizes false-positive signals (yellow squares) within the ring mask (the cell shown in A is highlighted in blue) of macrophages that are not infected. (D) False-positive signals (yellow squares) are created: the representative macrophage (highlighted in blue) appears to harbor more than 30 parasites, whereas no more than 10 can be visually determined (see B). (E) The algorithm that was developed to detect weak signals recognizes no false-positive signals in the ring mask of macrophages that are not infected. (F) All parasites within infected macrophages are recognized (see yellow squares and yellow arrows in (B)). Representative pictures out of three experiments are shown. Bar = 10 μm.
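    Conceptually, the parasite detection within the ring mask amounts to finding bright DAPI spots that fall inside the cytoplasmic ring. The sketch below illustrates this with a local-maximum spot detector; the intensity threshold is a placeholder, and, as panels (C) and (D) show for the weak-signal algorithm, choosing it too low produces false-positive signals.

```python
# Illustrative spot counting inside a ring mask (not the TissueQuest algorithm).
import numpy as np
from skimage.feature import peak_local_max

def count_parasites(dapi: np.ndarray, ring_mask: np.ndarray,
                    intensity_thr: float, min_dist: int = 3):
    # Bright local maxima above the threshold are treated as candidate parasite signals.
    peaks = peak_local_max(dapi, min_distance=min_dist, threshold_abs=intensity_thr)
    inside = [tuple(p) for p in peaks if ring_mask[p[0], p[1]]]  # keep spots within the ring
    return len(inside), inside
```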

    Phenotypical analysis of macrophages harboring different numbers of parasites.

    Macrophages were cultured in 96-well plates in the presence of L. major parasites (parasite/macrophage ratio 5:1). After 72 hours the cells were stained with F4/80-Cy5 and biotinylated CD11b + streptavidin-AF546. Macrophage nuclei and parasite DNA-rich areas were stained with DAPI. (A) The TissueQuest-based analysis of cells and intracellular parasites is presented. A histogram plot depicts the number of parasites within the macrophages (x-axis) and the number of macrophages harboring the indicated (see x-axis) number of Leishmania parasites. Cells within the highlighted gates were further analyzed regarding the expression of CD11b and F4/80. (B) The mean intensities of CD11b or F4/80 are plotted for macrophages harboring no (#0), one (#1), six (#6) or ten (#10) parasites. Statistical analyses were performed with GraphPad Prism using the nonparametric Mann-Whitney test (**p<0.01, *p<0.05; the red horizontal line represents the median). Every single dot represents one individually analyzed nucleated cell. One representative set of data out of two experiments is shown.
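    The group comparison in panel (B) is a nonparametric Mann-Whitney test on per-cell mean marker intensities. The caption reports it being run in GraphPad Prism; the sketch below shows the equivalent test in SciPy, with hypothetical input lists of intensities for two parasite-count groups.

```python
# Equivalent Mann-Whitney U test in SciPy (the paper used GraphPad Prism).
from scipy.stats import mannwhitneyu

def compare_intensity_groups(intensities_group_a, intensities_group_b):
    # e.g. mean CD11b intensities of macrophages with 0 vs. 6 intracellular parasites
    stat, p_value = mannwhitneyu(intensities_group_a, intensities_group_b,
                                 alternative="two-sided")
    return stat, p_value
```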

    Assessment of parasite load in situ.

    The skin-draining lymph nodes of infected BALB/c mice were removed 20 days after infection. Cryosections were stained with DAPI and analyzed by fluorescence microscopy. (A) One representative region of the infected lymph node is shown. The grayscale image depicts DAPI+ host cell nuclei and parasite DNA. (B) An overlay of the detection of nuclei (green), cytoplasm (orange), and parasites (magenta) is shown. (C) Backward gating visualizes cells harboring 3 and (D) 5–10 parasites (highlighted in magenta). (E) The graph represents the parasite load (y-axis) within infected cells. The horizontal line represents the median and the error bars display the interquartile range. Each dot represents a host cell harboring the indicated (y-axis) number of parasites.